# 4-bit efficient quantization
## Qwen3 8b 192k Context 6X Josiefied Uncensored MLX AWQ 4bit (Apache-2.0)
A 4-bit AWQ-quantized version of Qwen3-8B, optimized for the MLX framework. It supports long-context processing of up to 192k tokens and is suited to deployment on edge devices.
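To make the "4-bit quantization" idea concrete, here is a minimal sketch of group-wise 4-bit weight quantization in plain Python. This is an illustration of the general technique only, not the model's actual pipeline: AWQ additionally uses activation statistics to rescale salient weight channels before quantizing, which this sketch omits. All function names here are hypothetical.

```python
# Sketch: group-wise 4-bit quantization (absmax scaling per group).
# Illustration only -- real AWQ also applies activation-aware channel scaling.
from typing import List, Tuple


def quantize_4bit(weights: List[float], group_size: int = 4) -> Tuple[List[int], List[float]]:
    """Quantize floats to signed 4-bit ints (-8..7), one scale per group."""
    qs: List[int] = []
    scales: List[float] = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        # Scale so the largest magnitude in the group maps to +/-7.
        scale = max(abs(w) for w in group) / 7.0 or 1.0  # avoid div-by-zero
        scales.append(scale)
        qs.extend(max(-8, min(7, round(w / scale))) for w in group)
    return qs, scales


def dequantize_4bit(qs: List[int], scales: List[float], group_size: int = 4) -> List[float]:
    """Reconstruct approximate floats from 4-bit codes and group scales."""
    return [q * scales[i // group_size] for i, q in enumerate(qs)]


weights = [0.12, -0.07, 0.33, 0.02, -0.5, 0.25, 0.1, -0.04]
q, s = quantize_4bit(weights)
recon = dequantize_4bit(q, s)
err = max(abs(w - r) for w, r in zip(weights, recon))
```

Each 4-bit code needs only half a byte plus a shared per-group scale, which is roughly a 4x memory reduction versus 16-bit weights; the reconstruction error stays bounded by about half a scale step per weight.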
Category: Large Language Model
Publisher: Goraint · 204 · 1
© 2025 AIbase